
Aspectrr/linkedin connect new#302

Draft
aspectrr wants to merge 761 commits into stickerdaniel:main from
aspectrr:aspectrr/linkedin-connect-new

Conversation

@aspectrr
Contributor

No description provided.

stickerdaniel and others added 30 commits March 4, 2026 17:42
Replace "Flag-based section selection" with "explicit section
selection" to match refactored config dict approach.
- Dispatch on is_overlay in scrape_company for consistency
- Move nav delay before each navigation except first
- Patch asyncio.sleep in TestScrapePersonUrls/TestScrapeCompany
- Document unknown_sections in return format docs

Resolves: stickerdaniel#180
Assert exact number of page and overlay navigations in
test_all_sections_visit_all_urls to catch duplicate visits.
- Add unknown_sections to tool docstrings in person.py and company.py
- Add integration tests for unknown section names in both tools
- Document Greptile review endpoints in AGENTS.md
Patch _extract_overlay in test_posts_visits_recent_activity
for consistency with other TestScrapePersonUrls tests.
…r_scraping_replace_flag_enums_with_config_dicts

refactor(scraping): replace Flag enums with config dicts
…sync-tools-177

docs: sync manifest.json tools and features with current capabilities
…-file-maintenance

chore(deps): lock file maintenance
Lock file already has 3.1.0 since stickerdaniel#166; align pyproject.toml
floor to prevent accidental downgrades to v2.

Resolves: stickerdaniel#190
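
The constraint bump described above is a one-line change. An illustrative pyproject.toml fragment (dependency list abbreviated to the relevant entry):

```toml
[project]
dependencies = [
    # Floor raised from >=2.14.0 so the resolver can never backtrack to the
    # incompatible fastmcp v2 series; uv.lock already pins 3.1.0.
    "fastmcp>=3.0.0",
]
```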

<!-- greptile_comment -->

<h3>Greptile Summary</h3>

This PR tightens the `fastmcp` minimum version constraint from `>=2.14.0` to `>=3.0.0` in `pyproject.toml` (and the corresponding `uv.lock` metadata), preventing any future resolver from backtracking to the incompatible v2 series. The lock file has already been pinning `fastmcp==3.1.0` since PR stickerdaniel#166, so there is no runtime impact — this is purely a spec/metadata alignment.

- `pyproject.toml`: `fastmcp` floor raised to `>=3.0.0`
- `uv.lock`: `package.metadata.requires-dist` updated to match; the resolved package entry (`3.1.0`) is unchanged
- No upper-bound cap (`<4.0.0`) is set, which is consistent with the project's existing open-ended constraints for all other dependencies

<h3>Confidence Score: 5/5</h3>

- This PR is safe to merge — it is a pure metadata alignment with no functional or runtime impact.
- The locked version was already `3.1.0` before this PR; the only change is raising the declared floor to match. Both modified lines are trivially correct, consistent with each other, and have no side-effects on the installed environment.
- No files require special attention.

<h3>Important Files Changed</h3>




| Filename | Overview |
|----------|----------|
| pyproject.toml | Single-line change updating the `fastmcp` floor constraint from `>=2.14.0` to `>=3.0.0`, aligning with the already-resolved version in the lock file. |
| uv.lock | Auto-generated lock file metadata updated to reflect the new `>=3.0.0` specifier; the resolved `fastmcp` version (3.1.0) was already correct and unchanged. |




<h3>Flowchart</h3>

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A["pyproject.toml\nfastmcp >=3.0.0"] -->|uv resolves| B["uv.lock\nfastmcp 3.1.0 (pinned)"]
    B --> C["Installed environment\nfastmcp 3.1.0"]
    D["Old constraint\nfastmcp >=2.14.0"] -. "could resolve to" .-> E["fastmcp 2.x\n(incompatible)"]
    style D fill:#f9d0d0,stroke:#c00
    style E fill:#f9d0d0,stroke:#c00
    style A fill:#d0f0d0,stroke:#060
    style B fill:#d0f0d0,stroke:#060
    style C fill:#d0f0d0,stroke:#060
```

<sub>Last reviewed commit: 7d2363e</sub>

<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->
Replace dict-returning handle_tool_error() with raise_tool_error()
that raises FastMCP ToolError for known exceptions. Unknown exceptions
re-raise as-is for mask_error_details=True to handle.

Resolves: stickerdaniel#185
Add logger.error with exc_info for unknown exceptions before re-raising,
and add test coverage for AuthenticationError and ElementNotFoundError.
Re-add optional context parameter to raise_tool_error() for log
correlation, and add test for base LinkedInScraperException branch.
Add catch-all comment on base exception branch and NoReturn
inline comments on all raise_tool_error() call sites.
…eps_bump_fastmcp_constraint_to_3.0.0

refactor(error-handler): replace handle_tool_error with ToolError
Replace repeated ensure_authenticated/get_or_create_browser/
LinkedInExtractor boilerplate in all 6 tool functions with
FastMCP Depends()-based dependency injection via a single
get_extractor() factory in dependencies.py.

Resolves: stickerdaniel#186
Updated the get_extractor function to route errors through raise_tool_error, ensuring that MCP clients receive structured ToolError responses for authentication failures. Added a test to verify that authentication errors are correctly handled and produce the expected ToolError response.
…r_tools_use_depends_to_inject_extractor

refactor(tools): Use Depends() to inject extractor
Replace ToolAnnotations(...) with plain dicts, move title to
top-level @mcp.tool() param, and add category tags to all tools.

Resolves: stickerdaniel#189
…ickerdaniel#198)

Replace ToolAnnotations(...) with plain dicts, move title to
top-level @mcp.tool() param, and add category tags to all tools.

Resolves: stickerdaniel#189

<!-- greptile_comment -->

<h3>Greptile Summary</h3>

This PR is a clean, well-scoped refactoring that modernises tool metadata across all four changed files to align with the FastMCP 3.x API. It introduces no functional or behavioural changes.

Key changes:
- Removes the `ToolAnnotations(...)` Pydantic wrapper in `company.py`, `job.py`, and `person.py`, replacing it with plain `dict` syntax for the `annotations` parameter — the simpler form supported by FastMCP 3.x.
- Moves `title` from inside `ToolAnnotations` to a top-level keyword argument on `@mcp.tool()`, matching the updated FastMCP 3.x decorator signature.
- Drops the now-redundant `destructiveHint=False` from all read-only tools. Per the MCP spec, `destructiveHint` is only meaningful when `readOnlyHint` is `false`, so omitting it from tools that already declare `readOnlyHint=True` is semantically equivalent.
- Adds `tags` (as Python `set` literals) to every tool for categorisation (`"company"`, `"job"`, `"person"`, `"scraping"`, `"search"`, `"session"`).
- Enriches the previously unannotated `close_session` tool in `server.py` with a title, `destructiveHint=True`, and the `"session"` tag — accurately describing its destructive nature.

The existing test suite in `tests/test_tools.py` covers all tool functions but does not assert on annotation metadata, so no test changes are required. The refactoring is consistent across all tool files and fits naturally within the project's layered registration pattern.
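
Under the decorator shape described above, a registration would look roughly like the following. The `FakeMCP` class below is a stand-in so the sketch is self-contained (it only records the metadata the way FastMCP 3.x's `@mcp.tool()` accepts it), and `get_person_profile` is borrowed as an example name:

```python
class FakeMCP:
    """Stand-in recorder for @mcp.tool() decorator metadata."""

    def __init__(self) -> None:
        self.tools: dict[str, dict] = {}

    def tool(self, **meta):
        def decorator(fn):
            self.tools[fn.__name__] = meta
            return fn
        return decorator


mcp = FakeMCP()


# Before (FastMCP 2.x style):
#   @mcp.tool(annotations=ToolAnnotations(
#       title="Get Person Profile", readOnlyHint=True,
#       destructiveHint=False, openWorldHint=True))
# After (FastMCP 3.x style): title is a top-level param, annotations is a
# plain dict, and destructiveHint is dropped because it is only meaningful
# when readOnlyHint is false.
@mcp.tool(
    title="Get Person Profile",
    annotations={"readOnlyHint": True, "openWorldHint": True},
    tags={"person", "scraping"},
)
async def get_person_profile(linkedin_url: str) -> dict:
    ...
```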

<h3>Confidence Score: 5/5</h3>

- This PR is safe to merge — it is a pure metadata/annotation refactoring with no changes to tool logic, inputs, outputs, or error handling.
- All changes are limited to decorator parameters (`title`, `annotations`, `tags`). The `annotations` dict values are semantically equivalent to the removed `ToolAnnotations` objects, `destructiveHint=False` is correctly dropped only for `readOnlyHint=True` tools, and the new `close_session` annotations accurately reflect its destructive nature. No business logic, scraping behaviour, or error paths were altered.
- No files require special attention.

<h3>Flowchart</h3>

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A["@mcp.tool() decorator"] --> B{Annotation style}
    B -->|Before| C["ToolAnnotations(title=..., readOnlyHint=..., destructiveHint=False, openWorldHint=...)"]
    B -->|After| D["title='...' (top-level param)\nannotations={'readOnlyHint': True, 'openWorldHint': True}\ntags={'category', 'type'}"]
    D --> E["person tools\n(get_person_profile, search_people)"]
    D --> F["company tools\n(get_company_profile, get_company_posts)"]
    D --> G["job tools\n(get_job_details, search_jobs)"]
    D --> H["session tool\n(close_session)\nannotations={'destructiveHint': True}"]
```

<sub>Last reviewed commit: c5bf554</sub>

<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->
Use lowercase dict instead of Dict, add auth validation log line
stickerdaniel and others added 18 commits March 30, 2026 15:28
…ependencies

chore(deps): update ci dependencies
- Replace custom _secure_profile_dirs/_set_private_mode with thin
  _harden_linkedin_tree that uses secure_mkdir from common_utils
- Fix export_storage_state: chmod 0o600 after Playwright writes
- Add test for export_storage_state permission hardening
- Add test for no-op outside .linkedin-mcp tree
- Revert unrelated loaders.py change
…e-profile-perms

Harden .linkedin-mcp profile/cookie permissions
- Remove unused selector constants (_MESSAGING_THREAD_LINK_SELECTOR, _MESSAGING_RESULT_ITEM_SELECTOR, _MESSAGING_SEND_SELECTOR)
- Remove dead _conversation_thread_cache (new extractor per tool call)
- Add AuthenticationError handling to get_sidebar_profiles and all messaging tools
- Pass CSS selector as evaluate() arg instead of f-string interpolation
- Replace deprecated execCommand with press_sequentially
- Guard sidebar container walk against depth-limit exhaustion
- Update scrape_person docstring to document profile_urn return key
- Add messaging tools to README tool-status table
LinkedIn redirects /messaging/ to the most recent thread; capture
baseline_thread_id after the SPA settles so search-selected threads
can be distinguished from the auto-opened one.
…connect

feat: linkedin messaging, get sidebar profiles
…IDs (stickerdaniel#300)

* fix(scraping): Respect --timeout for messaging, recognize thread URLs

Remove all hardcoded timeout=5000 from the send_message flow and
messaging helpers so they fall through to the page-level default
set from BrowserConfig.default_timeout (configurable via --timeout).

Also add /messaging/thread/ URL recognition to classify_link so
conversation thread references are captured when they appear in
search results or conversation detail views. Raise inbox reference
cap to 30 and add proper section context labels.

Resolves: stickerdaniel#296
See also: stickerdaniel#297

* fix(scraping): Extract conversation thread IDs from inbox via click-and-capture

LinkedIn's conversation sidebar uses JS click handlers instead of <a>
tags, so anchor extraction cannot capture thread IDs. Click each
conversation item and read the resulting SPA URL change to build
conversation references with thread_id and participant name.

Before: get_inbox returned 2 references (active conversation only)
After: get_inbox returns all conversation thread IDs (10+ refs)

Resolves: stickerdaniel#297

* fix(scraping): Respect --timeout across all remaining scraping methods

Remove the remaining 10 hardcoded timeout=5000 from profile scraping,
connection flow, modal detection, sidebar profiles, conversation
resolution, and job search. All Playwright calls now use the page-level
default from BrowserConfig.default_timeout.

Resolves: stickerdaniel#299

* fix: Address PR review feedback

- Use saved inbox URL instead of self._page.url (P1: wrong URL after clicks)
- Fix docstring to clarify 2s recipient-picker probe is intentional
- Replace class-name selectors with aria-label discovery + minimal class fallback
- Dedupe references after merging conversation and anchor refs
Copilot AI review requested due to automatic review settings March 31, 2026 22:24
@aspectrr aspectrr marked this pull request as draft March 31, 2026 22:24
Contributor

Copilot AI left a comment


Pull request overview

Adds an MCP “notifications” feature intended to let clients subscribe to linkedin://notifications and receive push updates when a background poller detects new LinkedIn messages or connection approvals.

Changes:

  • Introduces an MCP notifications resource plus subscribe_notifications / unsubscribe_notifications tools.
  • Adds a background poller with persisted “already seen” state and a session registry that sends ResourceUpdatedNotification.
  • Extends the scraper/extractor with helper methods to poll inbox thread IDs and detect connection-approval notification snippets.

Reviewed changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated 8 comments.

Show a summary per file

| File | Description |
|------|-------------|
| linkedin_mcp_server/tools/notifications.py | Defines MCP resource + subscribe/unsubscribe tools for notification delivery. |
| linkedin_mcp_server/scraping/extractor.py | Adds polling helpers (thread IDs + connection approval snippets) and broadens “More” button selector strategy. |
| linkedin_mcp_server/scraper_lock.py | Introduces a purportedly shared asyncio lock for poller/tool serialization. |
| linkedin_mcp_server/notifications/poller.py | Implements the background polling loop that should push updates to subscribed sessions. |
| linkedin_mcp_server/notifications/registry.py | Stores subscribed sessions + pending events; sends ResourceUpdatedNotification to sessions. |
| linkedin_mcp_server/notifications/state.py | Persists “seen” notification state to disk for deduping. |
| linkedin_mcp_server/notifications/__init__.py | Package marker for notifications subsystem. |


Comment on lines +28 to +30
```python
def register_notification_tools(mcp: FastMCP) -> None:
    """Register the notifications resource and subscribe/unsubscribe tools."""
```

Copilot AI Mar 31, 2026


register_notification_tools() is defined here but it isn’t registered anywhere in the server setup, so the notifications resource/tools won’t be exposed at runtime. Wire this into the central tool-registration path (e.g., the MCP server factory) so clients can actually call subscribe_notifications() / read linkedin://notifications.

Copilot uses AI. Check for mistakes.
Comment on lines +1 to +4
"""Shared asyncio lock for serializing all browser operations.

Both SequentialToolExecutionMiddleware and the notification poller import this
module so they contend on the same lock, preventing concurrent browser use.

Copilot AI Mar 31, 2026


This file is intended to provide a shared lock with SequentialToolExecutionMiddleware, but the middleware currently uses its own private asyncio.Lock() and never calls get_scraper_lock(). As a result, the poller can still run concurrently with tool calls and race the shared browser/page; either update the middleware to use get_scraper_lock() or remove/adjust this module/docstring so the actual locking behavior matches the intent.

Suggested change

```diff
-"""Shared asyncio lock for serializing all browser operations.
-Both SequentialToolExecutionMiddleware and the notification poller import this
-module so they contend on the same lock, preventing concurrent browser use.
+"""Shared asyncio lock for serializing browser operations.
+This module exposes a process-wide asyncio.Lock via get_scraper_lock(). Any
+component that needs to prevent concurrent use of the shared browser/page
+should import this module and acquire the lock around its operations.
```

Comment on lines +41 to +56
```python
def save_state(state: NotificationState) -> None:
    """Persist notification state to disk."""
    try:
        _STATE_PATH.parent.mkdir(mode=0o700, parents=True, exist_ok=True)
        _STATE_PATH.write_text(
            json.dumps(
                {
                    "last_message_thread_ids": sorted(state.last_message_thread_ids),
                    "last_connection_approval_texts": sorted(
                        state.last_connection_approval_texts
                    ),
                },
                indent=2,
            ),
            encoding="utf-8",
        )
```

Copilot AI Mar 31, 2026


save_state() writes the on-disk notification state with Path.write_text(), which can leave the file with overly-permissive permissions and is not an atomic write. Since this state may contain sensitive message thread IDs / notification text, use the existing secure_write_text(..., mode=0o600) helper (and secure_mkdir) to ensure owner-only permissions and atomic updates.

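
The `secure_write_text` helper the review points to is not shown here. An owner-only atomic write could look like this sketch (name and signature per the review comment; implementation assumed):

```python
import os
import tempfile
from pathlib import Path


def secure_write_text(path: Path, text: str, mode: int = 0o600) -> None:
    """Atomically write text with owner-only permissions.

    Writes to a temp file in the target directory, chmods it before the
    rename, then replaces the destination in one step so readers never see
    a partially written or world-readable file.
    """
    path.parent.mkdir(mode=0o700, parents=True, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=path.parent)
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(text)
        os.chmod(tmp, mode)
        os.replace(tmp, path)  # atomic on POSIX filesystems
    except BaseException:
        os.unlink(tmp)
        raise
```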
Comment on lines +51 to +56
"""Return all pending events and clear the queue."""
events = list(_pending_events)
_pending_events.clear()
return events



Copilot AI Mar 31, 2026


notify_all() / the linkedin://notifications resource uses a single global _pending_events queue that is drained on read. With multiple subscribed sessions, whichever client reads first will consume events for all other clients, even though they also received ResourceUpdatedNotification. Consider making the pending queue per-session (or per-subscriber cursor) so event delivery is consistent for multiple MCP clients.

Suggested change

```diff
-    """Return all pending events and clear the queue."""
-    events = list(_pending_events)
-    _pending_events.clear()
-    return events
+    """Return a snapshot of all pending events without clearing the queue."""
+    return list(_pending_events)
```

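
One way to implement the per-session queue the reviewer suggests, sketched with plain dicts (session objects reduced to hashable IDs; the real registry's API is an assumption):

```python
from collections import defaultdict, deque
from typing import Any, Deque, Dict, List

# One pending-event queue per subscribed session, so each client drains
# only its own copy of the events instead of consuming everyone else's.
_pending: Dict[Any, Deque[dict]] = defaultdict(deque)


def subscribe(session_id: Any) -> None:
    _pending[session_id]  # touch to create this session's queue


def unsubscribe(session_id: Any) -> None:
    _pending.pop(session_id, None)


def add_events(events: List[dict]) -> None:
    """Fan events out to every subscriber's queue."""
    for queue in _pending.values():
        queue.extend(events)


def drain_events(session_id: Any) -> List[dict]:
    """Return and clear only this session's pending events."""
    queue = _pending.get(session_id)
    if queue is None:
        return []
    events = list(queue)
    queue.clear()
    return events
```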
Comment on lines +69 to +75
```python
async def notification_poller() -> None:
    """Long-running coroutine that polls LinkedIn and pushes MCP notifications."""
    logger.info("Notification poller started (interval=%ds)", POLL_INTERVAL_SECONDS)
    while True:
        await asyncio.sleep(POLL_INTERVAL_SECONDS)
        logger.debug("Notification poll starting")
        try:
```

Copilot AI Mar 31, 2026


notification_poller() is defined but not started anywhere (no references found), so no background polling/notifications will occur. Start it during the server lifespan (create an asyncio task) and ensure it is cancelled/awaited during shutdown to avoid orphaned tasks.

Comment on lines +71 to +74
```python
    logger.info("Notification poller started (interval=%ds)", POLL_INTERVAL_SECONDS)
    while True:
        await asyncio.sleep(POLL_INTERVAL_SECONDS)
        logger.debug("Notification poll starting")
```

Copilot AI Mar 31, 2026


The poll loop sleeps before doing the first poll, and CancelledError raised during that initial sleep won’t be caught/logged by the current try/except (since it begins after the sleep). Consider polling immediately on startup and wrapping the sleep inside the try so cancellation is handled consistently.

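
Restructured per that suggestion (poll immediately, keep the sleep inside the try, handle cancellation explicitly), the loop could look like the sketch below. `poll_once` is injected here purely for testability; the PR's poller inlines its polling logic:

```python
import asyncio
import logging

logger = logging.getLogger(__name__)
POLL_INTERVAL_SECONDS = 300  # 300s per the poll cycle shown in this PR


async def notification_poller(poll_once) -> None:
    """Poll immediately on startup, then every POLL_INTERVAL_SECONDS.

    The sleep lives inside the try, so CancelledError raised while sleeping
    is logged and re-raised consistently instead of escaping uncaught as it
    would if the loop slept before entering its try block.
    """
    logger.info("Notification poller started (interval=%ds)", POLL_INTERVAL_SECONDS)
    while True:
        try:
            await poll_once()
            await asyncio.sleep(POLL_INTERVAL_SECONDS)
        except asyncio.CancelledError:
            logger.info("Notification poller cancelled")
            raise
        except Exception:
            logger.exception("Notification poll failed; will retry")
            await asyncio.sleep(POLL_INTERVAL_SECONDS)
```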
Comment on lines +2176 to +2177
```python
await self._wait_for_main_text(log_context="Messaging inbox")
```

Copilot AI Mar 31, 2026


get_inbox_thread_ids() doesn’t call handle_modal_close() (unlike get_inbox() and other messaging flows). If a modal is present, the JS click loop may not navigate, producing empty/missing thread IDs. Align this method with get_inbox() by closing modals (and consider doing the same minimal scrolling used in get_inbox() if you want limit > currently-rendered items).

Suggested change

```diff
-await self._wait_for_main_text(log_context="Messaging inbox")
+await self._wait_for_main_text(log_context="Messaging inbox")
+await handle_modal_close(self._page)
+# Mirror get_inbox(): perform minimal scrolling so enough
+# conversations are rendered for the requested limit.
+scrolls = max(1, limit // 10)
+await self._scroll_main_scrollable_region(
+    position="bottom", attempts=scrolls, pause_time=0.5
+)
```

Comment on lines +1 to +7
"""Background notification poller.

Runs as a long-lived asyncio task started during server lifespan. Every
POLL_INTERVAL_SECONDS it acquires the shared scraper lock, polls LinkedIn for
new messages and connection approvals, and pushes a ResourceUpdatedNotification
to all subscribed MCP sessions when changes are detected.
"""

Copilot AI Mar 31, 2026


New notification polling / registry / persistence code is being introduced without any tests. Since the repo has substantial pytest coverage (e.g. tests/test_tools.py, tests/test_server.py), please add tests that cover: state load/save behavior, registry drain semantics, and that the poller triggers notify_all() when new events are detected (mocking the extractor/browser).

@greptile-apps
Contributor

greptile-apps Bot commented Mar 31, 2026

Greptile Summary

This PR adds a background LinkedIn notification polling system — a notifications/ package with a poller, session registry, and persistent state, plus a new scraper_lock.py for shared browser serialization, and get_inbox_thread_ids / get_connection_approval_notifications extractor methods. The feature design is reasonable in structure, but has three P1 defects that prevent it from working correctly and introduce a user-facing side effect.

Key issues:

  • Notification system is dead code: register_notification_tools(mcp) is never called in server.py, and notification_poller() is never started as a background task in browser_lifespan. None of the new functionality is reachable.
  • Scraper lock is not actually shared: scraper_lock.py claims that SequentialToolExecutionMiddleware and the poller share a lock, but the middleware still uses its own private self._lock. The poller can run concurrently with active tool calls, risking browser state corruption.
  • get_inbox_thread_ids marks messages as read — the extractor navigates to each conversation by clicking on it inside page.evaluate, which triggers LinkedIn's read-receipt and marks up to 10 threads as read per poll cycle. It also uses a class-name-based selector (div[class*="listitem__link"]) which violates the project's scraping rules.
  • Connection approval detection is English-only and fragile — matching on "accepted" / "connected" in plain text will produce false positives across other notification types and break for non-English LinkedIn locales.
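
A sketch of the anchored matching both reviewers suggest as a minimum fix. These pattern strings are assumptions about LinkedIn's English notification copy, and a real fix would need per-locale patterns or structural selectors:

```python
import re

# Anchored English phrases; a production version would need per-locale
# entries (or structural detection) rather than bare keyword matching.
_APPROVAL_PATTERNS = [
    re.compile(r"\baccepted your (?:connection request|invitation)\b", re.IGNORECASE),
    re.compile(r"\bis now a connection\b", re.IGNORECASE),
]


def looks_like_connection_approval(snippet: str) -> bool:
    """True only for text matching a connection-approval phrase, not any
    notification that merely contains 'accepted' or 'connected'."""
    return any(p.search(snippet) for p in _APPROVAL_PATTERNS)
```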

Confidence Score: 2/5

Not safe to merge — the feature is entirely unwired (dead code), the browser lock guarantee is broken, and the inbox polling marks user messages as read

Three independent P1 defects: (1) neither the poller task nor the notification tools are registered in server.py so the whole feature ships as dead code; (2) the new scraper_lock module is not used by SequentialToolExecutionMiddleware so concurrent browser access is unguarded; (3) get_inbox_thread_ids clicks each conversation to extract its thread ID, which marks messages as read on every 5-minute poll cycle — an irreversible, user-visible side effect.

linkedin_mcp_server/scraping/extractor.py (side-effecting inbox method, class-name selectors), linkedin_mcp_server/scraper_lock.py (lock not shared with middleware), linkedin_mcp_server/server.py (poller task and notification tools never wired up)

Important Files Changed

| Filename | Overview |
|----------|----------|
| linkedin_mcp_server/scraping/extractor.py | Adds get_inbox_thread_ids (marks messages as read via clicks, uses banned class selectors) and fragile English-only connection approval detection |
| linkedin_mcp_server/scraper_lock.py | Shared scraper lock module, but SequentialToolExecutionMiddleware still uses its own private lock — the shared-lock guarantee is broken |
| linkedin_mcp_server/notifications/poller.py | New background poller; never started in server lifespan, and state reload-per-cycle means save failures cause duplicate events |
| linkedin_mcp_server/tools/notifications.py | New MCP resource + subscribe/unsubscribe tools for notifications; well-structured but never registered in server.py |
| linkedin_mcp_server/notifications/registry.py | New session registry using WeakSet and module-level event queue; logic is sound but never wired up |
| linkedin_mcp_server/notifications/state.py | Persistent state for seen thread IDs and approval texts; straightforward JSON serialization to ~/.linkedin-mcp/ |
| linkedin_mcp_server/notifications/__init__.py | Package init with module docstring; no issues |

Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as MCP Client
    participant Server as FastMCP Server
    participant Poller as notification_poller (bg task)
    participant Registry as notifications/registry.py
    participant Extractor as LinkedInExtractor
    participant State as notification-state.json

    Note over Server,Poller: ⚠️ Poller never started in browser_lifespan
    Note over Server,Registry: ⚠️ register_notification_tools never called

    Client->>Server: subscribe_notifications()
    Server->>Registry: add_session(ctx.session)
    Registry-->>Client: Subscribed

    loop Every 300s
        Poller->>Poller: asyncio.sleep(300)
        Poller->>Extractor: get_inbox_thread_ids(limit=10)
        Note over Extractor: ⚠️ Clicks each conversation (marks as read)
        Extractor-->>Poller: [thread_ids]
        Poller->>Extractor: get_connection_approval_notifications()
        Note over Extractor: ⚠️ Text-matches accepted/connected (English only)
        Extractor-->>Poller: [snippets]
        Poller->>State: load_state() / save_state()
        Poller->>Registry: add_events(events)
        Poller->>Registry: notify_all()
        Registry->>Client: ResourceUpdatedNotification
        Client->>Server: GET linkedin://notifications
        Server->>Registry: drain_events()
        Registry-->>Client: [events]
    end
```

Comments Outside Diff (1)

  1. linkedin_mcp_server/server.py, line 49-62 (link)

    P1 Notification system is never wired into the server

    register_notification_tools(mcp) is never called in create_mcp_server(), and notification_poller() is never started as a background task in browser_lifespan. As a result, the entire notifications module (poller.py, registry.py, state.py, tools/notifications.py) is dead code — the tools are never registered and the background poller never runs.

    Two additions are needed in server.py:

    # In browser_lifespan, start the poller task:
    from linkedin_mcp_server.notifications.poller import notification_poller
    import asyncio
    
    @lifespan
    async def browser_lifespan(app: FastMCP) -> AsyncIterator[dict[str, Any]]:
        ...
        poller_task = asyncio.create_task(notification_poller(), name="notification-poller")
        yield {}
        poller_task.cancel()
        await close_browser()

    And in create_mcp_server():

    from linkedin_mcp_server.tools.notifications import register_notification_tools
    ...
    register_notification_tools(mcp)
Prompt To Fix All With AI
This is a comment left during a code review.
Path: linkedin_mcp_server/server.py
Line: 49-62

Comment:
**Notification system is never wired into the server**

`register_notification_tools(mcp)` is never called in `create_mcp_server()`, and `notification_poller()` is never started as a background task in `browser_lifespan`. As a result, the entire notifications module (`poller.py`, `registry.py`, `state.py`, `tools/notifications.py`) is dead code — the tools are never registered and the background poller never runs.

Two additions are needed in `server.py`:

```python
# In browser_lifespan, start the poller task:
from linkedin_mcp_server.notifications.poller import notification_poller
import asyncio

@lifespan
async def browser_lifespan(app: FastMCP) -> AsyncIterator[dict[str, Any]]:
    ...
    poller_task = asyncio.create_task(notification_poller(), name="notification-poller")
    yield {}
    poller_task.cancel()
    await close_browser()
```

And in `create_mcp_server()`:
```python
from linkedin_mcp_server.tools.notifications import register_notification_tools
...
register_notification_tools(mcp)
```

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: linkedin_mcp_server/scraper_lock.py
Line: 1-19

Comment:
**Scraper lock not shared with `SequentialToolExecutionMiddleware`**

The module docstring states: *"Both `SequentialToolExecutionMiddleware` and the notification poller import this module so they contend on the same lock."* This is incorrect. `SequentialToolExecutionMiddleware` creates its own private lock (`self._lock = asyncio.Lock()` in `__init__`); it never calls `get_scraper_lock()`. The poller acquires `get_scraper_lock()` while tool execution is serialised by a completely different lock instance, so concurrent browser access between a running tool and the poller is not actually prevented.

To fix this, `SequentialToolExecutionMiddleware` needs to be updated to use `get_scraper_lock()` instead of its own lock:

```python
# sequential_tool_middleware.py
from linkedin_mcp_server.scraper_lock import get_scraper_lock

class SequentialToolExecutionMiddleware(Middleware):
    async def on_call_tool(self, context, call_next):
        async with get_scraper_lock():
            ...
```

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: linkedin_mcp_server/scraping/extractor.py
Line: 2186-2193

Comment:
**Clicking conversations marks them as read on LinkedIn**

`get_inbox_thread_ids` extracts thread IDs by literally clicking each conversation item in the inbox, which triggers LinkedIn's read-receipt logic and marks messages as unread → read for the user. The background poller calls this on every cycle, silently marking up to 10 conversations as read every 5 minutes even when the user hasn't opened them.

Additionally, `div[class*="listitem__link"]` is a class-name-based selector, which violates the CLAUDE.md scraping rule: *"never class names tied to LinkedIn's layout."*

A better approach is to extract thread IDs from the `href` attributes of the `<a>` anchor elements rendered in the conversation list, without clicking:

```js
const anchors = Array.from(document.querySelectorAll(
    'main a[href*="/messaging/thread/"]'
));
const results = [];
for (let i = 0; i < Math.min(anchors.length, limit); i++) {
    const match = anchors[i].href.match(/\/messaging\/thread\/([^/?#]+)/);
    if (match) results.push({ threadId: match[1] });
}
return results;
```

This approach is non-destructive and does not rely on class names.

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: linkedin_mcp_server/scraping/extractor.py
Line: 2206-2229

Comment:
**Connection approval detection is English-only and prone to false positives**

`get_connection_approval_notifications` identifies connection approvals by scanning notification text for the words `"accepted"` or `"connected"`. This will:

1. Produce false positives for unrelated notifications containing these words (e.g. *"You've been accepted into a group"*, *"Your application was accepted"*).
2. Miss connection approvals entirely for users with LinkedIn set to a non-English locale.
3. Generate a different snippet text for the same event on repeated polls if the notification page content changes slightly (e.g. timestamps are rewritten), causing the same approval to be re-emitted.

Consider targeting connection-specific structured elements instead (e.g. notification items with a specific `data-` attribute or icon type), or at a minimum anchoring the match to the grammatical patterns LinkedIn actually uses (*"accepted your connection request"*, *"is now connected with you"*).
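A minimal sketch of the anchored-pattern fallback. The phrasings below are assumptions about LinkedIn's English notification copy, not verified strings, and this still does not solve the locale problem — structured elements remain the better target:

```python
import re

# Illustrative patterns only: anchored to phrasings LinkedIn is believed to
# use in English. A real fix should prefer structured notification elements.
APPROVAL_PATTERNS = [
    re.compile(r"accepted your (?:connection request|invitation)", re.IGNORECASE),
    re.compile(r"is now (?:connected with you|a connection)", re.IGNORECASE),
]


def is_connection_approval(line: str) -> bool:
    """Return True only when a notification line matches an approval phrasing."""
    return any(p.search(line) for p in APPROVAL_PATTERNS)
```

This rules out lines like "Your application was accepted" while still matching "Jane Doe accepted your connection request".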

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: linkedin_mcp_server/scraping/extractor.py
Line: 539-543

Comment:
**Class-name selector violates scraping rules**

`"button.artdeco-button[aria-label*='More']"` uses a LinkedIn design-system class name (`artdeco-button`), which directly violates the CLAUDE.md guideline: *"When DOM access is unavoidable, use minimal generic selectors — never class names tied to LinkedIn's layout."*

Since `aria-label*='More'` already constrains the match, the class prefix adds no resilience and is more brittle than it appears. Remove this selector from the fallback list; the three preceding `aria-label`-based and `:has-text` selectors already provide sufficient coverage without relying on a layout class.
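A sketch of the trimmed fallback list, keeping only the selectors already shown in the snippet under review (the surrounding method is not reproduced here):

```python
# Fallback selectors for the "More" button, with the class-based
# "button.artdeco-button[aria-label*='More']" entry removed. The remaining
# entries rely only on aria-label and visible text, not layout classes.
more_selectors = [
    "main button[aria-label*='More']",
    "main button:has-text('More')",
    "button[aria-label*='More']",
]
```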

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: linkedin_mcp_server/notifications/poller.py
Line: 80-85

Comment:
**State is reloaded from disk on every poll cycle, discarding in-memory updates on failure**

`load_state()` is called inside the lock on every cycle. If `save_state(state)` fails silently (it catches all exceptions) after `state` was mutated, the next cycle reloads the pre-update state from disk and the same events are re-emitted to clients.

Consider loading the state once at poller startup and keeping it in memory across cycles, only reloading from disk if the in-memory state is unavailable. This pattern also eliminates repeated disk I/O.
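A minimal sketch of the load-once pattern, assuming `load_state` is the existing helper in `poller.py`; the function name and module-level cache are illustrative:

```python
from typing import Callable, Optional

# In-memory cache of the poller state; populated once and reused across
# poll cycles so a silent save_state() failure cannot cause re-emission.
_cached_state: Optional[dict] = None


def get_poll_state(load_state: Callable[[], dict]) -> dict:
    """Load state from disk on first use, then return the in-memory copy."""
    global _cached_state
    if _cached_state is None:
        _cached_state = load_state()
    return _cached_state
```

Each poll cycle would then call `get_poll_state(load_state)` instead of `load_state()` directly, so mutations survive a failed save.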

How can I resolve this? If you propose a fix, please make it concise.

Reviews (1): Last reviewed commit: "feat: add ability to listen for connecti..."

Comment on lines +1 to +19
```python
"""Shared asyncio lock for serializing all browser operations.

Both SequentialToolExecutionMiddleware and the notification poller import this
module so they contend on the same lock, preventing concurrent browser use.
"""

from __future__ import annotations

import asyncio

_lock: asyncio.Lock | None = None


def get_scraper_lock() -> asyncio.Lock:
    """Return the process-wide scraper lock, creating it on first call."""
    global _lock
    if _lock is None:
        _lock = asyncio.Lock()
    return _lock
```

P1 Scraper lock not shared with SequentialToolExecutionMiddleware

Comment on lines +2186 to +2193
```js
const ariaLabel = label.getAttribute('aria-label') || '';
const name = ariaLabel
    .replace(/^Select conversation with\\s*/i, '').trim();
const clickTarget = label.closest('li')
    ?.querySelector('div[class*="listitem__link"]');
if (!clickTarget) continue;
clickTarget.click();
await new Promise(r => setTimeout(r, 300));
```

P1 Clicking conversations marks them as read on LinkedIn

Comment on lines +2206 to 2229

```python
    async def get_connection_approval_notifications(self) -> list[str]:
        """Get connection approval notification texts from LinkedIn notifications page."""
        url = "https://www.linkedin.com/notifications/"
        await self._navigate_to_page(url)
        await detect_rate_limit(self._page)
        await self._wait_for_main_text(log_context="Notifications")
        await handle_modal_close(self._page)

        raw_result = await self._extract_root_content(["main"])
        raw_text = raw_result.get("text", "")

        snippets: list[str] = []
        if raw_text:
            for line in raw_text.split("\n"):
                if "accepted" in line.lower() or "connected" in line.lower():
                    cleaned = line.strip()
                    if cleaned and len(cleaned) > 10:
                        snippets.append(cleaned[:200])
        return snippets

    async def _extract_conversation_thread_refs(self, limit: int) -> list[Reference]:
        """Click each inbox conversation item and capture the thread URL.
```

P1 Connection approval detection is English-only and prone to false positives

Comment on lines +539 to +543
```python
        # Try multiple selectors for the More button
        more_selectors = [
            "main button[aria-label*='More']",
            "main button:has-text('More')",
            "button[aria-label*='More']",
```

P2 Class-name selector violates scraping rules

Comment on lines +80 to +85
```python
state = load_state()
events = await _poll_once(extractor, state)

if events:
    add_events(events)
    save_state(state)
```

P2 State is reloaded from disk on every poll cycle, discarding in-memory updates on failure

@stickerdaniel (Owner)

@aspectrr thanks for your PRs! I would polish the recently added messaging tools first before adding new features. (307 / 304)
